HYDRA: Hybrid Deep Magnetic Resonance Fingerprinting
Purpose: Magnetic resonance fingerprinting (MRF) methods typically rely on
dictionary matching to map the temporal MRF signals to quantitative tissue
parameters. Such approaches suffer from inherent discretization errors, as well
as high computational complexity as the dictionary size grows. To alleviate
these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting
approach, referred to as HYDRA.
Methods: HYDRA involves two stages: a model-based signature restoration phase
and a learning-based parameter restoration phase. Signal restoration is
implemented using low-rank based de-aliasing techniques while parameter
restoration is performed using a deep nonlocal residual convolutional neural
network. The designed network is trained on synthesized MRF data simulated with
the Bloch equations and fast imaging with steady state precession (FISP)
sequences. In test mode, it takes a temporal MRF signal as input and produces
the corresponding tissue parameters.
Results: We validated our approach on both synthetic data and anatomical data
generated from a healthy subject. The results demonstrate that, in contrast to
conventional dictionary-matching based MRF techniques, our approach
significantly improves inference speed by eliminating the time-consuming
dictionary matching operation, and alleviates discretization errors by
outputting continuous-valued parameters. We further avoid the need to store a
large dictionary, thus reducing memory requirements.
Conclusions: Our approach demonstrates advantages in terms of inference
speed, accuracy and storage requirements over competing MRF methods.
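The dictionary-matching baseline that HYDRA replaces can be sketched in a few lines. The toy exponential signal model and coarse T1 grid below are illustrative assumptions, not the Bloch-simulated FISP dictionary used in the paper; the point is only to show where the discretization error comes from.

```python
import numpy as np

# Hypothetical toy model: signal s(T1) = exp(-t / T1) at fixed time points.
t = np.linspace(0.05, 3.0, 60)                 # acquisition times (s)
T1_grid = np.arange(0.1, 2.0, 0.1)             # coarse dictionary grid
D = np.exp(-t[None, :] / T1_grid[:, None])     # dictionary: one row per T1
D /= np.linalg.norm(D, axis=1, keepdims=True)  # normalize entries

def match(signal):
    """Return the grid T1 whose dictionary entry best correlates with signal."""
    signal = signal / np.linalg.norm(signal)
    return T1_grid[np.argmax(D @ signal)]

true_T1 = 0.83                                 # off-grid ground truth
estimate = match(np.exp(-t / true_T1))
# The estimate is snapped to the grid, so the error is bounded below by the
# distance to the nearest grid point: the inherent discretization error.
```

A network that regresses continuous-valued parameters avoids both this snapping and the cost of scanning the dictionary, which grows with the grid resolution.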
Hardware-Limited Task-Based Quantization
Quantization plays a critical role in digital signal
processing systems. Quantizers are typically designed to obtain
an accurate digital representation of the input signal, operating
independently of the system task, and are commonly implemented
using serial scalar analog-to-digital converters (ADCs). In this
work, we study hardware-limited task-based quantization, where
a system utilizing a serial scalar ADC is designed to provide a suitable representation in order to allow the recovery of a parameter
vector underlying the input signal. We propose hardware-limited
task-based quantization systems for a fixed and finite quantization
resolution, and characterize their achievable distortion. We then
apply the analysis to the practical setups of channel estimation
and eigen-spectrum recovery from quantized measurements. Our
results illustrate that properly designed hardware-limited systems
can approach the optimal performance achievable with vector
quantizers, and that by taking the underlying task into account,
the quantization error can be made negligible with a relatively
small number of bits.
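A toy numerical sketch of the task-based idea follows. All modeling choices here (a linear Gaussian model, a pseudo-inverse analog combiner, the bit budget and dynamic ranges) are our own illustrative assumptions, not the paper's system design; they only demonstrate that spending the bit budget on the task-relevant quantities beats quantizing the raw signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(v, n_bits, dyn_range):
    """Mid-rise uniform scalar quantizer, 2**n_bits levels on [-dyn_range, dyn_range]."""
    step = 2 * dyn_range / (2 ** n_bits)
    v = np.clip(v, -dyn_range, dyn_range - 1e-12)
    return (np.floor(v / step) + 0.5) * step

# Linear model x = H s + w; the task is recovering s, not representing x.
n, k, budget = 16, 2, 16                  # measurements, parameters, total bits
H = rng.standard_normal((n, k))
A = np.linalg.pinv(H)                     # analog combiner (LS estimator of s)

mse_task = mse_plain = 0.0
trials = 2000
for _ in range(trials):
    s = rng.standard_normal(k)
    x = H @ s + 0.05 * rng.standard_normal(n)
    # Task-based: combine in analog, spend the budget on the k task values.
    mse_task += np.sum((uniform_quantize(A @ x, budget // k, 4.0) - s) ** 2)
    # Task-ignorant: spend the same budget on the n raw measurements.
    mse_plain += np.sum((A @ uniform_quantize(x, budget // n, 4.0) - s) ** 2)
mse_task, mse_plain = mse_task / trials, mse_plain / trials
```

With the budget fixed, the task-aware system assigns many bits to each of the few task-relevant values, while the task-ignorant one is forced down to one bit per measurement.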
Task-Based Quantization for Massive MIMO Channel Estimation
Massive multiple-input multiple-output (MIMO) systems are
the focus of increasing research attention. In such setups, there is an
urgent need to utilize simple low-resolution quantizers, due to power
and memory constraints. In this work we study massive MIMO channel
estimation with quantized measurements, when the quantization system
is designed to minimize the channel estimation error, as opposed to
the quantization distortion. We first consider vector quantization, and
characterize the minimal error achievable. Next, we focus on practical
systems utilizing scalar uniform quantizers, and design the analog
and digital processing as well as the quantization dynamic range to
optimize the channel estimation accuracy. Our results demonstrate that
the resulting massive MIMO system which utilizes low-resolution scalar
quantizers can approach the minimal estimation error dictated by rate-distortion theory, achievable using vector quantizers.
Magnetic Resonance Fingerprinting Using a Residual Convolutional Neural Network
Conventional dictionary matching based MR Fingerprinting (MRF) reconstruction approaches suffer from time-consuming operations that map temporal MRF signals to quantitative tissue parameters. In this paper, we design a 1-D residual convolutional neural network to perform the signature-to-parameter mapping in order to improve inference speed and accuracy. In particular, a 1-D convolutional neural network with shortcuts, a.k.a. skip connections, for residual learning is developed using the TensorFlow platform. To avoid the requirement for a large amount of MRF data, the designed network is trained on synthesized MRF data simulated with the Bloch equations and fast imaging with steady state precession (FISP) sequences. The proposed approach was validated on both synthetic data and phantom data generated from a healthy subject. The reconstruction demonstrates a significantly improved speed: only 1.6 s to reconstruct a pair of T1/T2 maps of size 128 × 128, 50× faster than the original dictionary matching based method. Improved performance was also confirmed by higher signal to noise ratio (SNR) and reduced root mean square error (RMSE). Furthermore, storing a network is more compact than storing a large dictionary.
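The skip-connection mechanism the abstract refers to can be illustrated with a minimal numpy forward pass. The trained model is a TensorFlow network; the kernel sizes, weights, and input below are placeholders chosen only to show the residual structure.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D convolution (correlation) of signal x with kernel w."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

def residual_block(x, w1, w2):
    """Two 1-D convolutions with a ReLU in between, plus an identity skip
    connection: the layers only learn a correction F(x) on top of x."""
    h = np.maximum(conv1d(x, w1), 0.0)    # conv -> ReLU
    return x + conv1d(h, w2)              # skip connection: output = x + F(x)

signal = np.sin(np.linspace(0.0, 3.0, 32))   # stand-in for a temporal MRF signal
out = residual_block(signal, np.full(3, 0.1), np.full(3, 0.1))
```

With all-zero weights the block reduces to the identity, which is what makes deep residual networks easy to optimize.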
Learning-based reconstruction of FRI signals
Finite Rate of Innovation (FRI) sampling theory enables reconstruction of classes of continuous non-bandlimited signals that have a small number of free parameters from their low-rate discrete samples. This task is often translated into a spectral estimation problem that is solved using methods that estimate signal subspaces, which tend to break down at a certain peak signal-to-noise ratio (PSNR). To avoid this breakdown, we consider alternative approaches that make use of information from labelled data. We propose two model-based learning methods: deep unfolding of the denoising process in spectral estimation, and an encoder-decoder deep neural network that models the acquisition process. Simulation results for both learning algorithms indicate significant improvements in the breakdown PSNR over classical subspace-based methods. While the deep unfolded network achieves similar performance to the classical FRI techniques and outperforms the encoder-decoder network in low-noise regimes, the latter can reconstruct the FRI signal even when the sampling kernel is unknown. We also achieve competitive results in detecting pulses from in vivo calcium imaging data in terms of true positive and false positive rates while providing more precise estimates.
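The subspace step whose noise breakdown motivates the learning methods can be sketched with the classical annihilating-filter recovery. This is a textbook version under our own simplifying assumptions (noiseless samples of a sum of complex exponentials), not the paper's unfolded network.

```python
import numpy as np

def annihilating_filter_freqs(y, K):
    """Recover the K frequencies of y[n] = sum_k a_k exp(2j*pi*f_k*n)
    via the annihilating filter (classical subspace step in FRI recovery)."""
    N = len(y)
    # Rows [y[n], y[n-1], ..., y[n-K]] of the system T h = 0.
    T = np.array([y[i + K:i:-1] for i in range(N - K)])
    T = np.column_stack([T, y[:N - K]])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                     # null vector = filter taps
    roots = np.roots(h)                   # roots lie at exp(2j*pi*f_k)
    return np.sort(np.angle(roots) / (2 * np.pi) % 1)

f_true = np.array([0.12, 0.37])
n = np.arange(16)
y = sum(np.exp(2j * np.pi * f * n) for f in f_true)
f_est = annihilating_filter_freqs(y, K=2)
```

In noise, the SVD-based null-space estimate degrades sharply below a threshold PSNR, which is the breakdown the learned methods are designed to push back.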
Cramér-Rao Bound Optimization for Joint Radar-Communication Beamforming
In this paper, we propose multiple-input multiple-output (MIMO) beamforming designs towards joint radar sensing and multi-user communications. We employ the Cramér-Rao bound (CRB) as a performance metric of target estimation, under both point and extended target scenarios. We then propose minimizing the CRB of radar sensing while guaranteeing a pre-defined level of signal-to-interference-plus-noise ratio (SINR) for each communication user. For the single-user scenario, we derive the optimal solution in closed form for both point and extended targets. For the multi-user scenario, we show that both problems can be relaxed into semidefinite programs via semidefinite relaxation, and prove that the global optimum can always be obtained. Finally, we demonstrate numerically that the globally optimal solutions are reachable via the proposed methods, which provide significant gains in target estimation performance over state-of-the-art benchmarks.
Integrated Sensing and Communications: Towards Dual-functional Wireless Networks for 6G and Beyond
As the standardization of 5G solidifies, researchers are speculating what 6G will be. The integration of sensing functionality is emerging as a key feature of the 6G Radio Access Network (RAN), allowing for the exploitation of dense cell infrastructures to construct a perceptive network. In this IEEE Journal on Selected Areas in Communications (JSAC) Special Issue overview, we provide a comprehensive review of the background, range of key applications and state-of-the-art approaches of Integrated Sensing and Communications (ISAC). We commence by discussing the interplay between sensing and communications (S&C) from a historical point of view, and then consider the multiple facets of ISAC and the resulting performance gains. By introducing both ongoing and potential use cases, we shed light on the industrial progress and standardization activities related to ISAC. We analyze a number of performance tradeoffs between S&C, spanning from information theoretical limits to physical layer performance tradeoffs, and the cross-layer design tradeoffs. Next, we discuss the signal processing aspects of ISAC, namely ISAC waveform design and receive signal processing. As a step further, we provide our vision on the deeper integration between S&C within the framework of perceptive networks, where the two functionalities are expected to mutually assist each other, i.e., via communication-assisted sensing and sensing-assisted communications. Finally, we identify the potential integration of ISAC with other emerging communication technologies, and their positive impacts on the future of wireless networks.
Rate-distortion trade-offs in acquisition of signal parameters
We consider problems where one wishes to represent a parameter associated with a signal source - subject to a certain rate and distortion - based on the observation of a number of realizations of the source signal. By reducing these indirect vector quantization problems to a standard vector quantization problem, we provide a bound on the fundamental interplay between the rate and distortion in the large-rate setting. We specialize this characterization to two particular quantization scenarios: i) the representation of the mean of a multivariate Gaussian source; and ii) the representation of the eigen-spectrum of a multivariate Gaussian source. Numerical results compare our quantization approach to an approach where one recovers the parameters from the representation of the source signal itself: in addition to revealing that the characterization is sharp in the large-rate setting, the results also show that our approach offers considerable gains.
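The gain from representing the parameter rather than the source can be seen in a scalar toy experiment. The model below (a Gaussian source with unknown mean, a uniform quantizer, and a fixed total bit budget) is our own illustrative setup, not the paper's multivariate analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def uq(v, n_bits, dr):
    """Uniform mid-rise quantizer with 2**n_bits levels on [-dr, dr]."""
    step = 2 * dr / (2 ** n_bits)
    return (np.floor(np.clip(v, -dr, dr - 1e-12) / step) + 0.5) * step

m, budget, trials = 8, 16, 5000       # samples per realization, total bits
err_param = err_signal = 0.0
for _ in range(trials):
    mu = rng.uniform(-1, 1)           # parameter underlying the source
    x = mu + 0.3 * rng.standard_normal(m)
    # (i) estimate first, then spend all bits on the parameter estimate
    err_param += (uq(x.mean(), budget, 2.0) - mu) ** 2
    # (ii) quantize the raw samples, then estimate from the representation
    err_signal += (uq(x, budget // m, 2.0).mean() - mu) ** 2
err_param /= trials
err_signal /= trials
```

Spending the whole budget on the parameter estimate leaves the error dominated by the estimation noise alone, while quantizing the realizations first adds coarse per-sample distortion before the averaging.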
Coupled Dictionary Learning for Multi-contrast MRI Reconstruction
Magnetic resonance (MR) imaging tasks often involve multiple contrasts, such as T1-weighted, T2-weighted and Fluid-attenuated inversion recovery (FLAIR) data. These contrasts capture information associated with the same underlying anatomy and thus exhibit similarities in structure or gray level. In this paper, we propose a Coupled Dictionary Learning based multi-contrast MRI reconstruction (CDLMRI) approach to leverage the correlation between different contrasts for guided or joint reconstruction from their under-sampled k-space data. Our approach iterates between three stages: coupled dictionary learning, coupled sparse denoising, and enforcing k-space consistency. The first stage learns a set of dictionaries that are not only adaptive to the contrasts, but also capture correlations among multiple contrasts in a sparse transform domain. By capitalizing on the learned dictionaries, the second stage performs coupled sparse coding to remove the aliasing and noise in the corrupted contrasts. The third stage enforces consistency between the denoised contrasts and the measurements in the k-space domain. Numerical experiments, consisting of retrospective under-sampling of various MRI contrasts with a variety of sampling schemes, demonstrate that CDLMRI is capable of capturing structural dependencies between different contrasts. The learned priors indicate notable advantages in multi-contrast MR imaging and promising applications in quantitative MR imaging such as MR fingerprinting.
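The k-space consistency stage of such an iteration reduces to a simple projection. This sketch assumes single-coil Cartesian sampling with a boolean mask; the function name and setup are ours.

```python
import numpy as np

def enforce_kspace_consistency(image, measured_k, mask):
    """Keep the estimate's spectrum where k-space was not sampled, but
    reinsert the acquired measurements at the sampled locations."""
    k = np.fft.fft2(image)
    k[mask] = measured_k[mask]
    return np.fft.ifft2(k).real

# Toy usage: with a fully sampled mask, the projection restores the truth.
rng = np.random.default_rng(2)
truth = rng.standard_normal((8, 8))
noisy = truth + 0.1 * rng.standard_normal((8, 8))
full_mask = np.ones((8, 8), dtype=bool)
recovered = enforce_kspace_consistency(noisy, np.fft.fft2(truth), full_mask)
```

In the actual CDLMRI loop this projection alternates with the coupled sparse denoising, so the estimate stays faithful to the acquired data while the dictionaries remove aliasing.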
Sparsity and cosparsity for audio declipping: a flexible non-convex approach
This work investigates the empirical performance of the sparse synthesis
versus sparse analysis regularization for the ill-posed inverse problem of
audio declipping. We develop a versatile non-convex heuristic which can be
readily used with both data models. Based on this algorithm, we report that, in
most cases, the two models perform comparably in terms of signal
enhancement. However, the analysis version is shown to be amenable to
real-time audio processing when certain analysis operators are considered.
Both versions outperform state-of-the-art methods in the field, especially for
severely saturated signals.
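The clipping-consistency constraint shared by both data models can be written as a simple projection. The function below is a sketch under our own naming; the paper's heuristic combines such a constraint with a sparse synthesis or analysis prior.

```python
import numpy as np

def project_clip_consistent(estimate, observed, theta):
    """Project onto signals consistent with hard clipping at level theta:
    reliable samples are pinned to the observation, clipped samples must
    stay beyond +/- theta."""
    out = np.asarray(estimate, dtype=float).copy()
    reliable = np.abs(observed) < theta       # unclipped samples
    out[reliable] = observed[reliable]
    out[observed >= theta] = np.maximum(out[observed >= theta], theta)
    out[observed <= -theta] = np.minimum(out[observed <= -theta], -theta)
    return out

x = np.array([0.2, 0.9, -1.4, 0.4])           # original (unknown) signal
clipped = np.clip(x, -0.5, 0.5)               # what the recorder observed
est = project_clip_consistent(np.zeros(4), clipped, 0.5)
```

Any declipped estimate must lie in this set, which is why both the synthesis and analysis formulations enforce it alongside their respective sparsity models.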
- …